A digital twin is defined as a virtual representation of a physical asset, enabled through data and simulators, for real-time prediction, optimization, monitoring, control, and improved decision-making. Unfortunately, the term remains vague and says little about a twin's actual capability. Recently, the concept of capability levels has been introduced to address this issue. The concept states that a digital twin can be categorized on a scale from zero to five based on its capability, with the levels referred to as standalone, descriptive, diagnostic, predictive, prescriptive, and autonomous, respectively. The current work introduces the concept in the context of the built environment and demonstrates it using a modern house as a use case. The house is equipped with an array of sensors that collect time-series data about its internal state. Together with physics-based and data-driven models, these data are used to develop digital twins at different capability levels, demonstrated in virtual reality. In addition to presenting a blueprint for developing digital twins, the work also provides future research directions for enhancing the technology.
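For readers who want the capability scale in code form, the six levels map naturally onto a small enumeration. The class below is our own illustration (the abstract only names the levels), and the one-line glosses paraphrase how such levels are commonly described rather than quoting the paper.

```python
from enum import IntEnum

class DigitalTwinCapability(IntEnum):
    """Capability levels of a digital twin, from 0 (standalone) to 5 (autonomous)."""
    STANDALONE = 0    # model exists, but is not connected to live asset data
    DESCRIPTIVE = 1   # mirrors the current state of the asset
    DIAGNOSTIC = 2    # helps explain why the asset is in its current state
    PREDICTIVE = 3    # forecasts future states
    PRESCRIPTIVE = 4  # recommends actions
    AUTONOMOUS = 5    # closes the loop and acts on the asset itself
```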
Physics-based models have been mainstream in fluid dynamics for developing predictive models. In recent years, machine learning has offered a renaissance to the fluid community thanks to rapid developments in data science, processing units, neural-network-based technologies, and sensors. So far, in many applications in fluid dynamics, machine learning approaches have mostly focused on a standard process that requires centralizing the training data on a designated machine or in a data center. In this letter, we present a federated machine learning approach that enables localized clients to collaboratively learn an aggregated and shared predictive model while keeping all the training data on each edge device. We demonstrate the feasibility and prospects of this decentralized learning approach with an effort to build a deep learning surrogate model for reconstructing spatiotemporal fields. Our results indicate that federated machine learning might be a viable tool for designing highly accurate predictive decentralized digital twins relevant to fluid dynamics.
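A minimal sketch of the federated-averaging idea described above (not the authors' implementation): each edge client trains a local surrogate on its own snapshots, and only model weights, never the raw field data, are sent to a server that averages them. A linear surrogate stands in for the deep learning model.

```python
import numpy as np

def local_update(weights, X, y, lr=1e-3, epochs=5):
    """One client's local training pass for a linear surrogate y ≈ X @ w."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's parameters by its number of local samples."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def training_round(global_w, client_data):
    """One communication round: clients train locally, server aggregates."""
    updates, sizes = [], []
    for X, y in client_data:              # (X, y) never leaves the edge device
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return federated_average(updates, sizes)
```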
A digital twin is a surrogate model whose main feature is to mirror the behavior of the original process. Associating a dynamical process with a digital twin model of reduced complexity has the significant advantage of mapping the dynamics with high accuracy, and at reduced cost in CPU time and hardware, over timescales on which the process undergoes substantial changes and is therefore difficult to explore. This paper introduces a new framework for creating efficient digital twins of fluid flows. We introduce a novel algorithm that combines the advantages of Krylov-based dynamic mode decomposition and proper orthogonal decomposition, and outperforms them in selecting the most influential modes. We show that the randomized orthogonal decomposition algorithm provides several advantages over SVD-based empirical orthogonal decomposition methods and mitigates the projection error in the multi-objective optimization problem. We employ state-of-the-art artificial intelligence deep learning (DL) to perform real-time adaptive calibration of the digital twin model, with increasing fidelity. The output is a high-fidelity digital twin model of the fluid flow dynamics with reduced complexity. The new modeling tools are investigated in numerical simulations of three wave phenomena of increasing complexity. We show that the outputs are consistent with the original source data. We evaluate the performance of the new digital twin models thoroughly in terms of numerical accuracy and computational efficiency, including a study of the time-simulation response features.
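To illustrate the kind of randomized orthogonal decomposition the abstract refers to, here is a sketch of a randomized range finder applied to a snapshot matrix. This shows the general randomized-SVD idea under our own simplifying assumptions, not the specific algorithm of the paper.

```python
import numpy as np

def randomized_pod(snapshots, r, oversample=10, seed=0):
    """Extract r modes from a (n_points x n_snapshots) matrix using
    a randomized range finder instead of a full SVD."""
    rng = np.random.default_rng(seed)
    n, m = snapshots.shape
    # Random projection captures the dominant column space of the snapshots.
    sketch = snapshots @ rng.standard_normal((m, r + oversample))
    Q, _ = np.linalg.qr(sketch)
    # A small SVD on the projected problem recovers the leading modes.
    U_small, s, Vt = np.linalg.svd(Q.T @ snapshots, full_matrices=False)
    modes = Q @ U_small[:, :r]           # spatial modes
    coeffs = np.diag(s[:r]) @ Vt[:r]     # temporal coefficients
    return modes, coeffs

# Reduced-order reconstruction: snapshots ≈ modes @ coeffs
```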
Upcoming technologies, such as digital twins and autonomous and artificially intelligent systems involving safety-critical applications, require models that are accurate, interpretable, computationally efficient, and generalizable. Unfortunately, the two most commonly used modeling approaches, physics-based modeling (PBM) and data-driven modeling (DDM), fail to satisfy all of these requirements. In the current work, we demonstrate how a hybrid approach combining the best of PBM and DDM can result in models that outperform both. We do so by combining partial differential equations based on first principles with black-box DDM, in this case a deep neural network model that compensates for the unknown physics. First, we present a mathematical argument for why this approach should work, and then apply the hybrid approach to a two-dimensional heat diffusion problem with an unknown source term. The results demonstrate the method's superior performance in terms of accuracy and generalizability. Additionally, it is shown how the DDM part can be interpreted within the hybrid framework to make the overall approach reliable.
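To make the hybrid idea concrete, here is a schematic (our own, not the paper's code) of an explicit finite-difference heat-diffusion step in which a neural network supplies the source term that the first-principles model is missing. `source_net` is a placeholder for any trained regression model that maps the current field to a correction field of the same shape; periodic boundaries are used purely for brevity.

```python
import numpy as np

def hybrid_heat_step(T, alpha, dx, dt, source_net):
    """One explicit time step of the 2-D heat equation
        dT/dt = alpha * laplacian(T) + s(x, t),
    where the source s is unknown to the physics model and is
    predicted by a data-driven model (source_net)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    physics = alpha * lap          # known first-principles part (PBM)
    learned = source_net(T)        # DDM compensates for the unknown physics
    return T + dt * (physics + learned)
```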
Autonomous systems are becoming ubiquitous and gaining momentum within the marine sector. Since the electrification of transport is happening simultaneously, autonomous marine vessels can reduce environmental impact, lower costs, and increase efficiency. Although close monitoring is still required to ensure safety, the ultimate goal is full autonomy. One major milestone is to develop a control system that is versatile enough to handle any weather and any encounter, while also being robust and reliable. Additionally, the control system must adhere to the international regulations for preventing collisions at sea (COLREGs) in order to interact successfully with human sailors. Because the COLREGs were written for the human mind to interpret, they are expressed in ambiguous prose and are therefore neither machine-readable nor verifiable. Due to these challenges and the variety of situations to be handled, classical model-based approaches prove complicated to implement and computationally heavy. Within machine learning (ML), deep reinforcement learning (DRL) has shown great potential for a wide range of applications. The model-free and self-learning properties of DRL make it a promising candidate for autonomous vessels. In this work, a subset of the COLREGs is incorporated into a DRL-based path-following and obstacle-avoidance system using collision risk theory. The resulting autonomous agent dynamically interpolates between path following and collision avoidance in the training scenario, in isolated encounter situations, and in AIS-based simulations of real-world scenarios.
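A highly simplified sketch of how a collision-risk term might enter the reward of such a DRL agent. The risk formula, distance scales, and weighting below are illustrative assumptions of ours, not the paper's actual reward design.

```python
import numpy as np

def collision_risk(dist, closing_speed, d_safe=200.0):
    """Toy risk measure: grows as an obstacle gets close and is approaching."""
    approach = max(closing_speed, 0.0)
    return approach * np.exp(-dist / d_safe)

def reward(cross_track_error, speed_along_path, obstacles, lam=0.5):
    """Trade off path following against collision risk; lam sets the balance."""
    path_term = speed_along_path * np.exp(-abs(cross_track_error) / 50.0)
    risk_term = sum(collision_risk(d, v) for d, v in obstacles)
    return (1 - lam) * path_term - lam * risk_term
```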
Autoencoder techniques find increasingly common use in reduced order modeling as a means to create a latent space. This reduced order representation offers a modular, data-driven modeling approach for nonlinear dynamical systems when integrated with a time-series predictive model. In this letter, we put forth a nonlinear proper orthogonal decomposition (POD) framework, an end-to-end Galerkin-free model that combines autoencoders with long short-term memory networks for the dynamics. By eliminating the projection error caused by the truncation of Galerkin models, a key enabler of the proposed non-intrusive approach is the kinematic construction of a nonlinear mapping between the full-rank expansion of the POD coefficients and the latent space where the dynamics evolve. We test our framework for model reduction of a convection-dominated system, which is generally challenging for reduced order models. Our approach not only improves the accuracy but also significantly reduces the computational cost of training and testing.
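A compact sketch, in PyTorch, of the kind of architecture the letter describes: an autoencoder that maps full-rank POD coefficients to a small latent space, with an LSTM evolving that latent state in time. Layer sizes are arbitrary placeholders, and the exact architecture of the paper may differ.

```python
import torch
import torch.nn as nn

class NonlinearPOD(nn.Module):
    """Autoencoder over POD coefficients plus an LSTM for latent dynamics."""
    def __init__(self, n_modes=64, latent=4, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_modes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_modes))
        self.dynamics = nn.LSTM(latent, latent, batch_first=True)

    def forward(self, pod_coeffs_seq):
        # pod_coeffs_seq: (batch, time, n_modes) full-rank POD coefficients
        z = self.encoder(pod_coeffs_seq)     # nonlinear mapping to latent space
        z_next, _ = self.dynamics(z)         # advance the latent state in time
        return self.decoder(z_next)          # back to POD coefficient space
```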
In this work, we introduce, justify, and demonstrate the Corrective Source Term Approach (CoSTA), a novel approach to Hybrid Analysis and Modeling (HAM). The objective of HAM is to combine physics-based modeling (PBM) and data-driven modeling (DDM) to create generalizable, trustworthy, accurate, computationally efficient, and self-evolving models. CoSTA achieves this objective by augmenting the governing equations of a PBM model with a corrective source term generated by a deep neural network. In a series of numerical experiments on one-dimensional heat diffusion, CoSTA is found to outperform comparable DDM and PBM models in terms of accuracy, often reducing predictive errors by several orders of magnitude, while also generalizing better than pure DDM. Due to its flexible yet solid theoretical foundation, CoSTA provides a modular framework for leveraging novel developments within both PBM and DDM. Its theoretical foundation also ensures that CoSTA can be used to model any system governed by (deterministic) partial differential equations. Moreover, CoSTA facilitates interpretation of the DNN-generated source term within the context of PBM, which results in improved explainability of the DNN. These factors make CoSTA a potential door-opener for data-driven techniques to enter high-stakes applications previously reserved for pure PBM.
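A minimal sketch of the corrective-source-term idea on a 1-D heat problem, under our own simplifying assumptions (explicit time stepping, fixed boundary nodes, a generic `correction_net` callable): the governing equation is stepped as usual, a DNN-generated corrective source term is added to the discretized right-hand side, and training targets for that term can be formed from the residual the pure PBM step leaves against reference data.

```python
import numpy as np

def pbm_step(T, alpha, dx, dt):
    """Uninformed physics-based step for 1-D heat diffusion (interior nodes)."""
    T_new = T.copy()
    T_new[1:-1] += dt * alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    return T_new

def costa_step(T, alpha, dx, dt, correction_net):
    """PBM step augmented with a DNN-generated corrective source term."""
    T_new = pbm_step(T, alpha, dx, dt)
    T_new[1:-1] += dt * correction_net(T)[1:-1]   # learned correction
    return T_new

def correction_target(T_true_now, T_true_next, alpha, dx, dt):
    """Training target for the corrective term: the residual left behind
    when the pure PBM step is compared with the true next state."""
    return (T_true_next - pbm_step(T_true_now, alpha, dx, dt)) / dt
```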
Advances in reinforcement learning have led to its successful application in complex tasks with continuous state and action spaces. Despite these advances in practice, most theoretical work pertains to finite state and action spaces. We propose building a theoretical understanding of continuous state and action spaces by employing a geometric lens. Central to our work is the idea that the transition dynamics induce a low-dimensional manifold of reachable states embedded in the high-dimensional nominal state space. We prove that, under certain conditions, the dimensionality of this manifold is at most the dimensionality of the action space plus one. This is the first result of its kind, linking the geometry of the state space to the dimensionality of the action space. We empirically corroborate this upper bound for four MuJoCo environments. We further demonstrate the applicability of our result by learning a policy in this low-dimensional representation. To do so, we introduce an algorithm that learns a mapping to a low-dimensional representation, as a narrow hidden layer of a deep neural network, in tandem with the policy using DDPG. Our experiments show that a policy learnt this way performs on par or better across four MuJoCo control suite tasks.
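One way to realize the "narrow hidden layer" described above is simply to place a bottleneck of width dim(action) + 1 inside the actor network. The sketch below is our own illustration in PyTorch; the DDPG training loop, critic, and the paper's exact architecture are omitted.

```python
import torch
import torch.nn as nn

class BottleneckActor(nn.Module):
    """DDPG-style actor whose narrow hidden layer learns a low-dimensional
    state representation of width action_dim + 1, mirroring the
    manifold-dimensionality bound."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        bottleneck = action_dim + 1
        self.to_manifold = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, bottleneck))
        self.policy_head = nn.Sequential(nn.Linear(bottleneck, hidden), nn.ReLU(),
                                         nn.Linear(hidden, action_dim), nn.Tanh())

    def forward(self, state):
        latent = self.to_manifold(state)   # learned low-dimensional representation
        return self.policy_head(latent)    # action in [-1, 1]^action_dim
```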
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision making and robotic control tasks. However, because they solely optimize for reward, the agent tends to search the same space redundantly. This slows learning and lowers the achieved reward. In this work, we present an off-policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly through the addition of entropy to the high level. The novelty of this work is the theoretical motivation for adding entropy to the RL objective in the HRL setting. We empirically show that entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablative study to analyze the effects of entropy on the hierarchy, in which adding entropy to the high level emerged as the most desirable configuration. Furthermore, a higher temperature in the low level leads to Q-value overestimation and increases the stochasticity of the environment that the high level operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
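For intuition, adding entropy to the high level amounts to a soft-Q-style bootstrap target for the high-level critic. The schematic below is a generic entropy-regularized target of the kind used in maximum-entropy RL, not SHIRO's actual code; `alpha` is the temperature discussed in the abstract.

```python
def soft_q_target(reward, next_q, next_log_prob, gamma=0.99, alpha=0.1):
    """Entropy-regularized bootstrap target for the high-level critic.
    reward is the (temporally abstracted) reward collected over the
    low-level rollout; -alpha * next_log_prob is the entropy bonus."""
    return reward + gamma * (next_q - alpha * next_log_prob)
```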
Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks. These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification.
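The collection loop the abstract describes can be summarized in a few lines of Python. The prompt wording below is a guess at the general pattern (three seed examples eliciting a fourth, followed by a paraphrasing pass), not the paper's actual templates, and `complete` stands in for any text-completion API.

```python
import random

def make_prompt(seed_examples):
    """Build a few-shot prompt from three seed instructions and ask for a fourth."""
    shots = "\n\n".join(f"Example {i + 1}:\n{ex}" for i, ex in enumerate(seed_examples))
    return shots + "\n\nExample 4:\n"

def collect(seed_pool, complete, n_examples):
    """Elicit new instructions from the model, then expand the set with paraphrases."""
    generated = []
    while len(generated) < n_examples:
        prompt = make_prompt(random.sample(seed_pool, 3))
        generated.append(complete(prompt))            # model-written instruction
    paraphrases = [complete(f"Rephrase the following instruction:\n{g}")
                   for g in generated]
    return generated + paraphrases
```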